 AAAI AI-Alert Ethics for Sep 1, 2020


Google offers to help others with the tricky ethics of AI

#artificialintelligence

Companies pay cloud-computing providers like Amazon, Microsoft, and Google big money to avoid operating their own digital infrastructure. Google's cloud division will soon invite customers to outsource something less tangible than CPUs and disk drives--the rights and wrongs of using artificial intelligence. The company plans to launch new AI ethics services before the end of the year. Initially, Google will offer others advice on tasks such as spotting racial bias in computer vision systems or developing ethical guidelines that govern AI projects. Longer term, the company may offer to audit customers' AI systems for ethical integrity and charge for ethics advice.


Physicists Must Engage with AI Ethics, Now

#artificialintelligence

Popular media depictions of AI often involve apocalyptic visions of killer robots, humans mined for resources, or the elimination of the human race altogether. Even the rosier visions of an AI-driven world imagine most traditional human efforts and behaviors replaced with machines. Our collective imaginations of AI are often focused on this "singularity"--an irreversible point when technology overtakes humanity. However, the realization of this kind of artificial general intelligence (AGI), where a machine can perform any cognitive task a human can, remains a long way off. It is true that there have been impressive advances in machine learning (ML) and that algorithms can now best humans at various tasks.


Derisking AI by design: How to build risk management into AI development

#artificialintelligence

Artificial intelligence (AI) is poised to redefine how businesses work. Already it is unleashing the power of data across a range of crucial functions, such as customer service, marketing, training, pricing, security, and operations. To remain competitive, firms in nearly every industry will need to adopt AI and the agile development approaches that enable building it efficiently to keep pace with existing peers and digitally native market entrants. But they must do so while managing the new and varied risks posed by AI and its rapid development. The reports of AI models gone awry due to the COVID-19 crisis have only served as a reminder that using AI can create significant risks.


Regulation of Artificial Intelligence in Europe and Japan

#artificialintelligence

Enterprises around the world are rapidly incorporating artificial intelligence (AI) into existing and new products and processes. This effort is not just to improve such offerings and services, but to achieve a qualitatively higher level of capability not possible before. It is clear that AI carries the potential for many new opportunities, across all industries, but it is also already recognized that it brings numerous risks as well. As with any technology, senior management and board directors need to be aware of both the opportunity and the risk in order to successfully and responsibly manage the enterprise. The opportunities are great--AI can assist in robotic process automation (RPA), machine learning, natural language processing, finding new drugs and therapies, and will be essential for driverless transportation--but if the risks are downplayed or overlooked, there can be serious reputational and/or legal consequences.


How AI is helping employers with hiring

#artificialintelligence

This is the first in a three-part series. In the already fast-changing world of HR, the ongoing COVID-19 pandemic is creating unimagined twists and turns as 2020 progresses, leading to unprecedented attention on HR technology to help employers manage these new challenges. Arguably, no emerging technology has had more impact on the evolution and refinement of the pandemic workplace than artificial intelligence--an impact expected to continue in the months and years ahead. One HR area that has benefited the most from AI-based solutions is workforce management, mainly in recruiting for employers whose business sectors continued to thrive, or in managing challenges such as furloughs and layoffs for the sectors hit hardest by COVID-19. According to Greg Moran, CEO at OutMatch, a SaaS-based talent intelligence platform, the movement toward HR digitization, with the use of AI and machine learning, was already well underway at the start of the year.


Trust In Artificial Intelligence, But Not Blindly

#artificialintelligence

Imagine the following situation: a company wants to teach an artificial intelligence (AI) to recognise horses in photos. To this end, it uses several thousand images of horses to train the AI until it can reliably identify the animal even in unknown images. The AI learns quickly; it is not clear to the company how it is making its decisions, but this is not really an issue for the company, which is simply impressed by how reliably the process works. Researchers talk in such cases about confounders, confounding factors that should actually have nothing to do with the identification process.


Countries are Demanding an International Treaty to Ban 'Killer Robots'

#artificialintelligence

The report, which compiles 97 countries' positions on fully autonomous weapons, says most of them want to "retain human control over the use of force". Additionally, a growing number of policymakers, artificial intelligence experts, private companies, international and domestic organisations, and ordinary individuals have endorsed the call to ban fully autonomous weapons. The authors explain that autonomous weapons "would decide who lives and dies, without … inherently human characteristics such as compassion that are necessary to make complex ethical choices."


World must come together to stop killer robots, experts urge

The Independent - Tech

The world must come together to take action on killer robots, according to a new report. There is increasing agreement among various countries that fully autonomous weapons should be banned to avoid the creation of such killer robots, the new report warns. It would be "unacceptable" if weapons systems were able to select and kill targets without human oversight, the researchers warn. The research by Human Rights Watch found that 30 countries have now expressed a desire for an international treaty requiring that human control be retained over the use of force. The new report, "Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control", reviews the policies of 97 countries that have publicly discussed killer robots since 2013.


UNESCO launches global consultation for 'ethics of AI' draft guidelines

#artificialintelligence

To help build a draft resolution on how AI can be developed and deployed, UNESCO is seeking input from global policymakers and AI experts. The United Nations Educational, Scientific and Cultural Organisation (UNESCO) has said there is an urgent need for a global instrument on the ethics of AI to ensure that those it is used by, and used with, are treated fairly and equally. It has now announced the launch of a global online consultation led by a group of 24 AI experts charged with writing a first draft of a 'Recommendation on the Ethics of AI' document. It is hoped that UNESCO member states will adopt the recommendation by November 2021, making it the first global normative instrument to address the development and application of AI. If the recommendation is adopted, these nations will be invited to submit reports every four years on the measures they have taken.


Efforts to understand impact of AI on society put pressure on biometrics industry to sort out priorities, role

#artificialintelligence

Companies involved in face biometrics and other artificial intelligence applications have not reached a consensus on which ethical principles to prioritize, which may cause problems for them as policymakers move to set regulations, according to a new report from EY. Facial recognition check-ins for venues such as airports, hotels and banks, and law enforcement surveillance, including the use of face biometrics, are two of a dozen specific use cases considered in the study. The report, 'Bridging AI's trust gaps', developed by EY in collaboration with The Future Society, suggests that companies developing and providing AI technologies are misaligned with policymakers, which is creating new risks for them. Third parties may have a role to play in bridging the trust gap, such as with an equivalent of 'organic' or 'fairtrade' labels, EY argues. For biometric facial recognition, 'fairness and avoiding bias' is the top priority for policymakers, followed by 'privacy and data rights' and 'transparency'. Among companies, privacy and data rights tops the list, followed by 'safety and security' and then 'transparency'.